
    A Minimal Architecture for General Cognition

    Full text link
    A minimalistic cognitive architecture called MANIC is presented. The MANIC architecture requires only three function-approximating models and one state machine. Even with so few major components, it is theoretically sufficient to achieve functional equivalence with all other cognitive architectures, and it can be practically trained. Instead of seeking to transfer architectural inspiration from biology into artificial intelligence, MANIC seeks to minimize novelty and to follow the most well-established constructs that have evolved within various sub-fields of data science. From this perspective, MANIC offers an alternate approach to a long-standing objective of artificial intelligence. This paper provides a theoretical analysis of the MANIC architecture.
    Comment: 8 pages, 8 figures, conference, Proceedings of the 2015 International Joint Conference on Neural Networks
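    The abstract does not name MANIC's components, so the following sketch is only a rough illustration of the shape it describes: three function-approximating models coordinated by a small state machine. The component roles (observation, transition, utility) and the two-state OBSERVE/ACT cycle are assumptions made for illustration, not the paper's design.
```python
# A minimal sketch of an agent loop built from three function approximators and a
# state machine. Component names and the OBSERVE/ACT cycle are illustrative
# assumptions, not taken from the MANIC paper.
from dataclasses import dataclass, field
from typing import Callable, List

Vector = List[float]

@dataclass
class MinimalAgent:
    observe: Callable[[Vector], Vector]          # senses -> internal beliefs
    transition: Callable[[Vector, int], Vector]  # (beliefs, action) -> predicted beliefs
    utility: Callable[[Vector], float]           # beliefs -> scalar desirability
    n_actions: int = 4
    state: str = "OBSERVE"                       # tiny state machine: OBSERVE <-> ACT
    beliefs: Vector = field(default_factory=lambda: [0.0])

    def step(self, senses: Vector) -> int:
        if self.state == "OBSERVE":
            self.beliefs = self.observe(senses)
            self.state = "ACT"
            return -1  # no action taken on this tick
        # ACT: choose the action whose predicted outcome scores highest
        best = max(range(self.n_actions),
                   key=lambda a: self.utility(self.transition(self.beliefs, a)))
        self.state = "OBSERVE"
        return best

# Toy usage with hand-written stand-ins for the learned models.
agent = MinimalAgent(
    observe=lambda s: s,
    transition=lambda b, a: [x + a for x in b],
    utility=lambda b: -abs(b[0] - 3.0),  # prefer beliefs near 3.0
)
agent.step([0.0])          # observe
print(agent.step([0.0]))   # act -> 3
```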

    Reducing the Effects of Detrimental Instances

    Full text link
    Not all instances in a data set are equally beneficial for inducing a model of the data. Some instances (such as outliers or noise) can be detrimental. However, at least initially, machine learning algorithms generally treat all instances in a data set equally. Many current approaches for handling noisy and detrimental instances make a binary decision about whether an instance is detrimental or not. In this paper, we 1) extend this paradigm by weighting the instances on a continuous scale and 2) present a methodology for measuring how detrimental an instance may be for inducing a model of the data. We call our method of identifying and weighting detrimental instances reduced detrimental instance learning (RDIL). We examine RDIL on a set of 54 data sets and 5 learning algorithms and compare RDIL with other weighting and filtering approaches. RDIL is especially useful for learning algorithms in which every instance can affect the classification boundary and the training instances are considered individually, such as multilayer perceptrons trained with backpropagation (MLPs). Our results also suggest that a more accurate estimate of which instances are detrimental can have a significant positive impact on how they are handled.
    Comment: 6 pages, 5 tables, 2 figures. arXiv admin note: substantial text overlap with arXiv:1403.189
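    As a concrete illustration of weighting instances on a continuous scale, the sketch below downweights each training instance by the probability an initial model assigns to its own label, then retrains with those weights. The heuristic, the logistic-regression learner, and all hyperparameters are illustrative assumptions rather than the RDIL procedure itself.
```python
# Continuous instance weighting during training: likely-detrimental instances
# (those the initial model finds implausible) contribute less to the gradient.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data with a few mislabeled (detrimental) points.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
y[:10] = 1.0 - y[:10]  # flip some labels to simulate noise

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, weights, epochs=200, lr=0.1):
    """Gradient descent on a per-instance-weighted logistic loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        err = weights * (p - y)          # each instance's gradient is scaled by its weight
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

# Pass 1: train with uniform weights.
w0, b0 = train_logreg(X, y, np.ones(len(y)))

# Weighting step: probability the initial model assigns to each instance's own label.
p1 = sigmoid(X @ w0 + b0)
weights = np.where(y == 1, p1, 1.0 - p1)

# Pass 2: retrain with the continuous weights; noisy instances now count for less.
w1, b1 = train_logreg(X, y, weights)
print("lowest-weighted instances:", np.argsort(weights)[:5])
```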

    Risk, Managerial Skill and Closed-End Fund Discounts

    Get PDF
    Empirical evidence from the UK market is brought to bear on recent theories of closed-end fund discounts. Market pricing of skill, relative to the fees charged for it, accounts for a significant portion of discount variation, but cannot explain the rarity of index funds or why they trade at a discount. Index funds have lower discount volatility. Discount risk is much more systematic on international than on domestic funds. It is argued that even idiosyncratic risk is priced in closed-end funds, because they are likely to represent a significant proportion of investors’ risky portfolios.
    Keywords: Closed-end fund; fund management; systematic risk

    Missing Value Imputation With Unsupervised Backpropagation

    Full text link
    Many data mining and data analysis techniques operate on dense matrices or complete tables of data. Real-world data sets, however, often contain unknown values. Even classification algorithms that are designed to operate with missing values still exhibit deteriorated accuracy. One approach to handling missing values is to fill in (impute) the missing values. In this paper, we present a technique for unsupervised learning called Unsupervised Backpropagation (UBP), which trains a multi-layer perceptron to fit the manifold sampled by a set of observed point-vectors. We evaluate UBP on the task of imputing missing values in datasets and show that UBP is able to predict missing values with significantly lower sum-squared error than other collaborative filtering and imputation techniques. We also demonstrate, using 24 datasets and 9 supervised learning algorithms, that classification accuracy is usually higher when randomly withheld values are imputed using UBP rather than with other methods.
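    A rough sketch of the idea described here, under several assumptions: each row is given a trainable low-dimensional latent vector, a small MLP maps latents to the observed columns, backpropagation updates both the MLP weights and the latent inputs using only the observed entries, and missing entries are then read off the MLP output. Layer sizes, the single hidden layer, and the learning rate are not from the paper.
```python
# Imputation by fitting an MLP to a learned latent representation of each row,
# training only on observed entries (a sketch in the spirit of the abstract).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a rank-2 nonlinear structure with roughly 30% of entries missing (NaN).
n, d, k = 100, 8, 2
Z_true = rng.normal(size=(n, k))
M_true = np.tanh(Z_true @ rng.normal(size=(k, d)))
mask = rng.random((n, d)) > 0.3          # True where the value is observed
X = np.where(mask, M_true, np.nan)

# Trainable parameters: latent inputs V (one per row) plus one-hidden-layer MLP weights.
h = 16
V  = 0.01 * rng.normal(size=(n, k))
W1 = 0.1 * rng.normal(size=(k, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)

lr = 0.05
for epoch in range(2000):
    # Forward pass through the MLP.
    H = np.tanh(V @ W1 + b1)
    P = H @ W2 + b2
    # Error only on observed entries; missing entries contribute nothing.
    E = np.where(mask, P - X, 0.0)
    # Backpropagate into the MLP weights *and* into the latent inputs V.
    dW2 = H.T @ E / n;  db2 = E.mean(axis=0)
    dH  = E @ W2.T * (1.0 - H**2)
    dW1 = V.T @ dH / n; db1 = dH.mean(axis=0)
    dV  = dH @ W1.T
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    V  -= lr * dV

# Impute: keep observed values, fill the rest with the MLP's reconstruction.
X_imputed = np.where(mask, X, np.tanh(V @ W1 + b1) @ W2 + b2)
rmse = np.sqrt(np.mean((X_imputed[~mask] - M_true[~mask]) ** 2))
print(f"imputation RMSE on held-out entries: {rmse:.3f}")
```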

    Formula SAE Intake System

    Get PDF
    The Formula SAE Intake System is intended to optimize the airflow into a restricted 600cc engine. The intake system is designed, fabricated, and installed in accordance with the FSAE rule book, with a focus on maximizing the vehicle’s acceleration. It is directly responsible for determining the drivability of the car and how much horsepower the flow-restricted engine produces. The design of the intake was conceptualized based on research into a number of factors, including venturi diffusion angles, plenum volumes, and runner lengths. Initial tests were performed using computational fluid dynamics, for a total of 367 flow simulations and 261 running hours on various intake components in SolidWorks Flow Simulation 2016. From the beginning, it was known that the intake needed to possess certain contours that would be very difficult to create with sheet metal. For this reason, a composite construction was pursued for the plenum of the intake manifold, using fused deposition modeling to form the mold. Flow testing and dynamometer testing will be used to verify the effectiveness of the design. In the end, the intake system will provide peak performance in the flow-restricted system. The increased brake horsepower and improved vehicle drivability will provide a competitive advantage on any race course.

    Asset Pricing with Observable Stochastic Discount Factors.

    Get PDF
    The stochastic discount factor (SDF) model provides a general framework for pricing assets. By specifying the discount factor suitably, it encompasses most of the theories currently in use, including the CAPM and the consumption CAPM. The SDF model has been based on the use of single and multiple factors, and on latent and observed factors. In most situations, and especially for the term structure, single-factor models are inappropriate, whilst latent variables require the somewhat arbitrary specification of generating processes and are difficult to interpret. In this paper we survey the principal implementations of the SDF model for FOREX, equity and bonds, and we propose a new approach. This is based on the use of multiple observable factors, modelling the joint distribution of excess returns and the factors with a multivariate GARCH-in-mean process. We argue that single-equation and VAR models, although widely used in empirical finance, are in general inappropriate because they do not satisfy the no-arbitrage condition. Since risk premia arise from conditional covariation between returns and the factors, both a multivariate setting and the presence of conditional covariances in the conditional mean process are essential. We explain how apparent exceptions, such as the CIR and Vasicek models, in fact meet this requirement, but at a price. We explain our new approach, discuss how it might be implemented, and present some empirical evidence, mainly from our own research. Partly to enable comparisons to be made, the survey also includes evidence from recent empirical work using more traditional approaches.
    Keywords: Asset Pricing; Stochastic Discount Factors; Forex; Equity Term Structure; Affine Factor Models; Consumption CAPM; Financial Econometrics; GARCH
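    The no-arbitrage logic in this abstract follows the standard SDF pricing relations; the equations below use conventional notation (not the paper's own) to show why the risk premium is a conditional covariance, which is exactly what a GARCH-in-mean specification places in the conditional mean.
```latex
% Standard SDF relations: the pricing condition, the excess-return restriction,
% and the implied risk premium as a conditional covariance with the discount factor.
\begin{align}
  1 &= \mathbb{E}_t\!\left[ M_{t+1} R_{i,t+1} \right], \\
  0 &= \mathbb{E}_t\!\left[ M_{t+1}\, x_{i,t+1} \right],
      \qquad x_{i,t+1} \equiv R_{i,t+1} - R_{f,t+1}, \\
  \mathbb{E}_t\!\left[ x_{i,t+1} \right]
    &= -\,\frac{\operatorname{Cov}_t\!\left( M_{t+1},\, x_{i,t+1} \right)}
               {\mathbb{E}_t\!\left[ M_{t+1} \right]}.
\end{align}
% With a linear factor specification M_{t+1} = a_t + b_t' f_{t+1}, the premium becomes
% a combination of conditional covariances between excess returns and the factors,
% so a multivariate GARCH-in-mean model (conditional covariances entering the
% conditional mean) can satisfy the no-arbitrage restriction, whereas single-equation
% and VAR models generally cannot.
```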